Oceania Government


Retrofitters, pragmatists and activists: Public interest litigation for accountable automated decision-making

Fraser, Henry, Stardust, Zahra

arXiv.org Artificial Intelligence

This paper examines the role of public interest litigation in promoting accountability for AI and automated decision-making (ADM) in Australia. Since ADM regulation faces geopolitical headwinds, effective governance will have to rely at least in part on the enforcement of existing laws. Drawing on interviews with Australian public interest litigators, technology policy activists, and technology law scholars, the paper positions public interest litigation as part of a larger ecosystem for transparency, accountability and justice with respect to ADM. It builds on one participant's characterisation of litigation about ADM as an exercise in legal retrofitting: adapting old laws to new circumstances. The paper's primary contribution is to aggregate, organise and present original insights on pragmatic strategies and tactics for effective public interest litigation about ADM. Naturally, it also contends with the limits of these strategies, and of the Australian legal system. Where these limits can be overcome, however, the paper presents findings on urgent needs: the enabling institutional arrangements without which effective litigation and accountability will falter. The paper is relevant to law and technology scholars; individuals and groups harmed by ADM; public interest litigators and technology lawyers; civil society and advocacy organisations; and policymakers.


AFP developing AI tool to decode gen Z slang amid warning about 'crimefluencers' hunting girls

The Guardian

Federal police say they have identified 59 alleged offenders as being in these online networks and have made an unspecified number of arrests. Australian federal police will develop an AI tool to decode gen Z and Alpha slang and emojis in an effort to crack down on sadistic online exploitation and "crimefluencers". The AFP commissioner, Krissy Barrett, used a speech at the National Press Club on Wednesday to warn of the rise of online crime networks of young boys and men who are targeting vulnerable teen and preteen girls. The newly appointed chief outlined how the perpetrators, who are overwhelmingly from English-speaking backgrounds, were grooming victims and then forcing them to "perform serious acts of violence on themselves, their siblings, others or their pets".


Lost in Translation: Policymakers are not really listening to Citizen Concerns about AI

Aaronson, Susan Ariel, Moreno, Michael

arXiv.org Artificial Intelligence

The world's people have strong opinions about artificial intelligence (AI), and they want policymakers to listen. Governments are inviting public comment on AI, but as they translate input into policy, much of what citizens say is lost. Policymakers are missing a critical opportunity to build trust in AI and its governance. This paper compares three countries, Australia, Colombia, and the United States, that invited citizens to comment on AI risks and policies. Using a landscape analysis, the authors examined how each government solicited feedback and whether that input shaped governance. Yet in none of the three cases did citizens and policymakers establish a meaningful dialogue. Governments did little to attract diverse voices or publicize calls for comment, leaving most citizens unaware or unprepared to respond. In each nation, fewer than one percent of the population participated. Moreover, officials showed limited responsiveness to the feedback they received, failing to create an effective feedback loop. The study finds a persistent gap between the promise and practice of participatory AI governance. The authors conclude that current approaches are unlikely to build trust or legitimacy in AI because policymakers are not adequately listening or responding to public concerns. They offer eight recommendations: promote AI literacy; monitor public feedback; broaden outreach; hold regular online forums; use innovative engagement methods; include underrepresented groups; respond publicly to input; and make participation easier.


Societal Capacity Assessment Framework: Measuring Resilience to Inform Advanced AI Risk Management

Gandhi, Milan, Cihon, Peter, Larter, Owen, Anselmetti, Rebecca

arXiv.org Artificial Intelligence

Risk assessments for advanced AI systems require evaluating both the models themselves and their deployment contexts. We introduce the Societal Capacity Assessment Framework (SCAF), an indicators-based approach to measuring a society's vulnerability, coping capacity, and adaptive capacity in response to AI-related risks. SCAF adapts established resilience analysis methodologies to AI, enabling organisations to ground risk management in insights about country-level deployment conditions. It can also support stakeholders in identifying opportunities to strengthen societal preparedness for emerging AI capabilities. By bridging disparate literatures and the "context gap" in AI evaluation, SCAF promotes more holistic risk assessment and governance as advanced AI systems proliferate globally.
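To make the indicators-based idea concrete, here is a minimal Python sketch of how country-level indicators might roll up into a dimension score. The indicator names, normalisation bounds, weights, and weighted-mean aggregation are all illustrative assumptions, not SCAF's published indicators or methodology.

```python
# Minimal sketch of an indicators-based capacity score, loosely in the
# spirit of SCAF. Indicator names, bounds, weights, and the weighted-mean
# aggregation are illustrative assumptions, not the paper's methodology.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str      # e.g. "regulator staffing per capita" (hypothetical)
    value: float   # observed country-level value
    lo: float      # plausible minimum, used for normalisation
    hi: float      # plausible maximum, used for normalisation
    weight: float  # relative importance within its dimension

def normalise(ind: Indicator) -> float:
    """Min-max normalise an indicator onto [0, 1]."""
    return max(0.0, min(1.0, (ind.value - ind.lo) / (ind.hi - ind.lo)))

def dimension_score(indicators: list[Indicator]) -> float:
    """Weighted mean of normalised indicators for one dimension
    (vulnerability, coping capacity, or adaptive capacity)."""
    total_weight = sum(i.weight for i in indicators)
    return sum(normalise(i) * i.weight for i in indicators) / total_weight

# Hypothetical coping-capacity indicators for one country:
coping = [
    Indicator("regulator staffing per capita", 0.6, 0.0, 1.0, 2.0),
    Indicator("incident-response maturity", 3.0, 1.0, 5.0, 1.0),
]
print(f"coping capacity score: {dimension_score(coping):.2f}")
```

One design point such a framework has to settle, which this sketch glosses over, is whether low scores on one indicator can be compensated by high scores on another; a weighted mean assumes they can.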


AMLNet: A Knowledge-Based Multi-Agent Framework to Generate and Detect Realistic Money Laundering Transactions

Huda, Sabin, Foo, Ernest, Jadidi, Zahra, Newton, MA Hakim, Sattar, Abdul

arXiv.org Artificial Intelligence

Anti-money laundering (AML) research is constrained by the lack of publicly shareable, regulation-aligned transaction datasets. We present AMLNet, a knowledge-based multi-agent framework with two coordinated units: a regulation-aware transaction generator and an ensemble detection pipeline. The generator produces 1,090,173 synthetic transactions (approximately 0.16% laundering-positive) spanning core laundering phases (placement, layering, integration) and advanced typologies (e.g., structuring, adaptive threshold behavior). Regulatory alignment reaches 75% based on AUSTRAC rule coverage (Section 4.2), while a composite technical fidelity score of 0.75 summarizes temporal, structural, and behavioral realism components (Section 4.4). The detection ensemble achieves F1 0.90 (precision 0.84, recall 0.97) on the internal test partitions of AMLNet and adapts to the external SynthAML dataset, indicating architectural generalizability across different synthetic generation paradigms. We provide multi-dimensional evaluation (regulatory, temporal, network, behavioral) and release the dataset (Version 1.0, https://doi.org/10.5281/zenodo.16736515) to advance reproducible and regulation-conscious AML experimentation.
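The reported metrics are internally consistent: F1 = 2 · 0.84 · 0.97 / (0.84 + 0.97) ≈ 0.90. For a concrete sense of the "structuring" typology the abstract names, here is a minimal Python sketch that splits a sum into deposits just under AUSTRAC's AUD 10,000 threshold-transaction reporting limit; the record schema and sampling choices are illustrative assumptions, not AMLNet's actual generator.

```python
# Minimal sketch of the "structuring" typology: splitting a large sum into
# deposits just under the reporting threshold. The record schema and the
# uniform sampling range are illustrative assumptions, not AMLNet's actual
# generator. AUD 10,000 is AUSTRAC's real threshold-transaction limit.
import random

THRESHOLD = 10_000  # AUSTRAC threshold-transaction report limit (AUD)

def structure(total: float, rng: random.Random) -> list[dict]:
    """Split `total` into laundering-positive deposits below THRESHOLD."""
    txns, remaining = [], total
    while remaining > 0:
        amount = min(remaining, rng.uniform(0.7, 0.99) * THRESHOLD)
        txns.append({"amount": round(amount, 2), "label": "laundering"})
        remaining -= amount
    return txns

for txn in structure(25_000, random.Random(42)):
    print(txn)
```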


Lawyer caught using AI-generated false citations in court case penalised in Australian first

The Guardian

A Victorian lawyer has become the first in Australia to face professional sanctions for using artificial intelligence in a court case, being stripped of his ability to practise as a principal lawyer after AI generated false citations that he had failed to verify. Guardian Australia reported in October last year that in a 19 July 2024 hearing, the anonymous solicitor representing a husband in a dispute between a married couple provided the court with a list of prior cases that had been requested by Justice Amanda Humphreys in relation to an enforcement application in the case. When Humphreys returned to her chambers, she said in a ruling that neither she nor her associates were able to identify the cases in the list. When the matter returned to court, the lawyer confirmed that the list had been prepared using legal software that utilised AI. He acknowledged he did not verify the accuracy of the information before submitting it to the court.


Australia moves to stamp out 'nudify' and stalking apps

Al Jazeera

Australia has announced plans to ban apps used for stalking and creating deepfake nudes. Tech platforms will be responsible for preventing access to "nudify" and undetectable online stalking tools under the reforms announced on Tuesday by the Australian government. Minister for Communications Anika Wells said Australia would work with firms to stamp out "abhorrent technologies" while ensuring "legitimate and consent-based" artificial intelligence (AI) and online tracking services were not adversely affected. "Abusive technologies are widely and easily accessible and are causing real and irreparable damage now," Wells said in a statement. "These new, evolving, technologies require a new, proactive, approach to harm prevention – and we'll work closely with industry to achieve this." "While this move won't eliminate the problem of abusive technology in one fell swoop, alongside existing laws and our world-leading online safety reforms, it will make a real difference in protecting Australians," she added.


Tax relief and Carmen Sandiego: Australia's once-dismissed video game industry is finally getting a leg-up

The Guardian

The idea that video games are not "serious things", says Ross Symons, overlooks the benefits they offer to gamers feeling isolated. "One thing that struck me during Covid is that games were the way that people connected and stayed together." The chief executive of Big Ant Studios, a Melbourne-based game developer, recalls when in 2010 the then opposition leader Tony Abbott dismissed the national broadband network as being for "internet-based television, video entertainment and gaming". Symons says that dismissiveness of the video game industry has not stood the test of time. Last year alone, Australians spent $3.8bn on video games, according to the Interactive Games and Entertainment Association (IGEA).


Signals from the Floods: AI-Driven Disaster Analysis through Multi-Source Data Fusion

Gong, Xian, McCarthy, Paul X., Tian, Lin, Rizoiu, Marian-Andrei

arXiv.org Artificial Intelligence

Massive and diverse web data are increasingly vital for government disaster response, as demonstrated by the 2022 floods in New South Wales (NSW), Australia. This study examines how X (formerly Twitter) and public inquiry submissions provide insights into public behaviour during crises. We analyse more than 55,000 flood-related tweets and 1,450 submissions to identify behavioural patterns during extreme weather events. While social media posts are short and fragmented, inquiry submissions are detailed, multi-page documents offering structured insights. Our methodology integrates Latent Dirichlet Allocation (LDA) for topic modelling with Large Language Models (LLMs) to enhance semantic understanding. LDA reveals distinct opinions and geographic patterns, while LLMs improve filtering by identifying flood-relevant tweets using public submissions as a reference. This Relevance Index method reduces noise and prioritizes actionable content, improving situational awareness for emergency responders. By combining these complementary data streams, our approach introduces a novel AI-driven method to refine crisis-related social media content, improve real-time disaster response, and inform long-term resilience planning.
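Here is a minimal Python sketch of the two-stage pipeline the abstract describes: LDA topics over flood tweets, then scoring each tweet's relevance against inquiry submissions. The paper's relevance step uses LLMs; the TF-IDF cosine similarity below is a simpler stand-in of mine, not the authors' actual Relevance Index, and the toy texts are invented.

```python
# Two-stage sketch: (1) LDA topic modelling over tweets, (2) relevance of
# each tweet to inquiry submissions. TF-IDF cosine similarity substitutes
# for the paper's LLM-based relevance step; texts are invented examples.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "flood water rising fast in lismore cbd",
    "great coffee and sunshine this morning",
    "evacuation order issued for northern rivers residents",
]
submissions = [
    "our submission details evacuation failures during the lismore floods",
]

# Stage 1: unsupervised topics over the tweet corpus.
counts = CountVectorizer(stop_words="english").fit_transform(tweets)
doc_topics = LatentDirichletAllocation(
    n_components=2, random_state=0
).fit_transform(counts)

# Stage 2: each tweet's best similarity to any inquiry submission.
tfidf = TfidfVectorizer(stop_words="english").fit(tweets + submissions)
relevance = cosine_similarity(
    tfidf.transform(tweets), tfidf.transform(submissions)
).max(axis=1)

for text, topics, score in zip(tweets, doc_topics, relevance):
    print(f"relevance={score:.2f} topic={topics.argmax()} {text}")
```

In this toy run the off-topic coffee tweet scores near zero relevance, which is the filtering behaviour the Relevance Index is meant to provide at scale.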


Ornithologist: Towards Trustworthy "Reasoning" about Central Bank Communications

Jones, Dominic Zaun Eu

arXiv.org Artificial Intelligence

I develop Ornithologist, a weakly supervised textual classification system, and use it to measure the hawkishness and dovishness of central bank text. Ornithologist uses "taxonomy-guided reasoning", guiding a large language model with human-authored decision trees. This increases the transparency and explainability of the system, makes it accessible to non-experts, and reduces hallucination risk. Since it requires less supervision than traditional classification systems, it can more easily be applied to other problems or sources of text (e.g. news) without much modification. Ornithologist's measurements of the hawkishness and dovishness of RBA communication carry information about the future path of the cash rate and about market expectations.
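To illustrate what taxonomy-guided reasoning might look like in code, here is a minimal Python sketch of a human-authored decision tree walked node by node. The tree, the keyword heuristic standing in for the LLM call, and the three-way labels are illustrative assumptions, not the paper's actual taxonomy or model.

```python
# Sketch of taxonomy-guided reasoning: a human-authored decision tree whose
# nodes are yes/no questions an LLM would answer about a passage of central
# bank text. The tree and the keyword answerer below are illustrative
# assumptions, not the paper's actual taxonomy or model.

TREE = {
    "question": "Does the text discuss the policy rate outlook?",
    "yes": {
        "question": "Does it signal upward pressure on rates?",
        "yes": "hawkish",
        "no": "dovish",
    },
    "no": "neutral",
}

def answer(question: str, text: str) -> bool:
    """Stand-in for an LLM call: crude keyword heuristics per question."""
    if "outlook" in question:
        return "cash rate" in text
    return any(w in text for w in ("raise", "tighten", "hike"))

def classify(node, text: str) -> str:
    """Walk the tree until a leaf label is reached."""
    while isinstance(node, dict):
        node = node["yes"] if answer(node["question"], text) else node["no"]
    return node

print(classify(TREE, "the board judged it appropriate to raise the cash rate"))
```

Because every classification is a path through human-readable questions, the decision trail can be audited directly, which is the transparency property the abstract emphasises.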